
    An overview of the design and methods for retrieving high-quality studies for clinical care

    BACKGROUND: With the information explosion, the retrieval of the best clinical evidence from large, general purpose, bibliographic databases such as MEDLINE can be difficult. Both researchers conducting systematic reviews and clinicians faced with a patient care question are confronted with the daunting task of searching for the best medical literature in electronic databases. Many have advocated the use of search filters or "hedges" to assist with the searching process. OBJECTIVE: To describe the design and methods of a study that set out to develop optimal search strategies for retrieving sound clinical studies of health disorders in large electronic databases. DESIGN: An analytic survey comparing hand searches of 170 journals in the year 2000 with retrievals from MEDLINE, EMBASE, CINAHL, and PsycINFO for candidate search terms and combinations. The sensitivity, specificity, precision, and accuracy of unique search terms and combinations of search terms were calculated. CONCLUSION: A study design modeled after a diagnostic testing procedure with a gold standard (the hand search of the literature) and a test (the search terms) is an effective way of developing, testing, and validating search strategies for use in large electronic databases.
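    The diagnostic-test framing above maps directly onto a 2x2 contingency table. A minimal sketch in Python, assuming the hand search yields a set of gold-standard "sound" articles and the candidate strategy yields a retrieval set (all names and counts here are illustrative, not from the study):

```python
def strategy_metrics(gold, retrieved, collection):
    """Score a search strategy as a diagnostic test against a hand-search
    gold standard. `gold`: articles judged methodologically sound by hand
    search; `retrieved`: articles returned by the candidate strategy;
    `collection`: every hand-searched article."""
    tp = len(gold & retrieved)                 # sound studies found
    fn = len(gold - retrieved)                 # sound studies missed
    fp = len(retrieved - gold)                 # retrievals that fail criteria
    tn = len(collection - gold - retrieved)    # correctly excluded articles
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / len(collection),
    }

# Hypothetical example: 100 sound studies in a 10,000-article collection.
gold = set(range(100))
retrieved = set(range(5, 500))   # strategy misses 5 and over-retrieves 400
print(strategy_metrics(gold, retrieved, set(range(10_000))))
```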

    Optimal search strategies for detecting cost and economic studies in EMBASE

    BACKGROUND: Economic evaluations in the medical literature compare competing diagnostic or treatment methods for their use of resources and their expected outcomes. The best evidence currently available from research regarding both cost and economic comparisons will continue to expand as this type of information becomes more important in today's clinical practice. Researchers and clinicians need quick, reliable ways to access this information. A key source of this type of information is large bibliographic databases such as EMBASE. The objective of this study was to develop search strategies that optimize the retrieval of health cost and economic studies from EMBASE. METHODS: We conducted an analytic survey, comparing hand searches of journals with retrievals from EMBASE for candidate search terms and combinations. Six research assistants read all issues of 55 journals indexed by EMBASE for the publishing year 2000. We rated all articles using purpose and quality indicators and categorized them into clinically relevant original studies, review articles, general papers, or case reports. The original and review articles were then categorized for purpose (i.e., cost and economics and other clinical topics) and, depending on the purpose, as 'pass' or 'fail' for methodologic rigor. Candidate search strategies were developed for economic and cost studies, then run in the 55 EMBASE journals, and the retrievals were compared with the hand search data. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated. RESULTS: Combinations of search terms for detecting both cost and economic studies attained levels of 100% sensitivity, with specificity levels of 92.9% and 92.3% respectively. When maximizing for both sensitivity and specificity, the combination of terms for detecting cost studies increased sensitivity by 2.2% over the single term, but at a slight decrease in specificity of 0.9%. The maximized combination of terms for economic studies saw no change in sensitivity from the single term and only a 0.1% increase in specificity. CONCLUSION: Selected terms have excellent performance in the retrieval of studies of health costs and economics from EMBASE.

    Optimal search strategies for identifying sound clinical prediction studies in EMBASE

    BACKGROUND: Clinical prediction guides assist clinicians by pointing to specific elements of the patient's clinical presentation that should be considered when forming a diagnosis, prognosis or judgment regarding treatment outcome. The number of validated clinical prediction guides in the medical literature is growing, but their retrieval from large biomedical databases remains problematic, and this presents a barrier to their uptake in medical practice. We undertook the systematic development of search strategies ("hedges") for retrieval of empirically tested clinical prediction guides from EMBASE. METHODS: An analytic survey was conducted, testing the retrieval performance of search strategies run in EMBASE against the gold standard of hand searching, using all 27,769 articles identified in 55 journals for the 2000 publishing year. All articles were categorized as original studies, review articles, general papers, or case reports. The original and review articles were then tagged as 'pass' or 'fail' for methodologic rigor in the areas of clinical prediction guides and other clinical topics. Search terms that depicted clinical prediction guides were selected from a pool of index terms and text words gathered in house and through request to clinicians, librarians and professional searchers. A total of 36,232 search strategies composed of single- and multiple-term phrases were trialed for retrieval of clinical prediction studies. The sensitivity, specificity, precision, and accuracy of the search strategies were calculated to identify which performed best. RESULTS: 163 clinical prediction studies were identified, of which 69 (42.3%) passed criteria for scientific merit. A 3-term strategy optimized sensitivity at 91.3% and specificity at 90.2%. Higher sensitivity (97.1%) was reached with a different 3-term strategy, but with a 16% drop in specificity. The best measure of specificity (98.8%) was found in a 2-term strategy, but with a considerable fall in sensitivity to 60.9%. All single term strategies performed less well than 2- and 3-term strategies. CONCLUSION: The retrieval of sound clinical prediction studies from EMBASE is supported by several search strategies.
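    The 36,232 candidate strategies suggest an exhaustive enumeration of term combinations. The sketch below shows one plausible way to enumerate and score OR-combinations of candidate terms against the hand-search labels; the OR-combination rule, the ranking criterion, and all names are illustrative assumptions, not the study's actual procedure.

```python
from itertools import combinations

def rank_strategies(terms, term_hits, gold, collection, max_terms=3):
    """Enumerate single- and multi-term strategies (here: OR-combinations
    of up to `max_terms` terms) and score each against the gold standard.
    `term_hits` maps each term to the set of article IDs it retrieves."""
    scored = []
    not_gold = collection - gold
    for k in range(1, max_terms + 1):
        for combo in combinations(terms, k):
            retrieved = set().union(*(term_hits[t] for t in combo))
            sens = len(gold & retrieved) / len(gold)
            spec = len(not_gold - retrieved) / len(not_gold)
            scored.append((sens + spec, sens, spec, combo))
    # Best balance of sensitivity and specificity first.
    return sorted(scored, reverse=True)
```

    Sorting instead by sensitivity alone or by specificity alone would yield the sensitivity-maximizing and specificity-maximizing strategies the abstract reports.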

    Sample size determination for bibliographic retrieval studies

    BACKGROUND: Research for developing search strategies to retrieve high-quality clinical journal articles from MEDLINE is expensive and time-consuming. The objective of this study was to determine the minimal number of high-quality articles in a journal subset that would need to be hand-searched to update or create new MEDLINE search strategies for treatment, diagnosis, and prognosis studies. METHODS: The desired width of the 95% confidence intervals (W) for the lowest sensitivity among existing search strategies was used to calculate the number of high-quality articles needed to reliably update search strategies. New search strategies were derived in journal subsets formed by 2 approaches: random sampling of journals and top journals (having the most high-quality articles). The new strategies were tested in both the original large journal database and in a low-yielding journal (having few high-quality articles) subset. RESULTS: For treatment studies, if W was 10% or less for the lowest sensitivity among our existing search strategies, a subset of 15 randomly selected journals or 2 top journals was adequate for updating search strategies, based on each approach having at least 99 high-quality articles. The new strategies derived in 15 randomly selected journals or 2 top journals performed well in the original large journal database. Nevertheless, the new search strategies developed using the random sampling approach performed better than those developed using the top journal approach in a low-yielding journal subset. For studies of diagnosis and prognosis, no journal subset had enough high-quality articles to achieve the expected W (10%). CONCLUSION: The approach of randomly sampling a small subset of journals that includes sufficient high-quality articles is an efficient way to update or create search strategies for high-quality articles on therapy in MEDLINE. The concentrations of diagnosis and prognosis articles are too low for this approach.
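    The sample-size logic can be made concrete with the usual normal approximation for a binomial proportion, in which the full confidence interval width is W = 2z * sqrt(p(1-p)/n). The abstract does not state the exact formula the authors used, so this is a sketch under that assumption; the 0.93 input below is illustrative.

```python
from math import ceil

def articles_needed(lowest_sensitivity, width=0.10, z=1.96):
    """Minimum number of gold-standard high-quality articles so that the
    95% CI for the lowest sensitivity among existing strategies has total
    width <= `width`, assuming width = 2 * z * sqrt(p * (1 - p) / n)."""
    p = lowest_sensitivity
    return ceil((2 * z) ** 2 * p * (1 - p) / width ** 2)

# An illustrative lowest sensitivity of 0.93 gives n = 101, in the same
# range as the "at least 99 high-quality articles" reported above.
print(articles_needed(0.93))
```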

    EMBASE search strategies for identifying methodologically sound diagnostic studies for use by clinicians and researchers

    BACKGROUND: Accurate diagnosis by clinicians is the cornerstone of decision making for recommending clinical interventions. The current best evidence from research concerning diagnostic tests changes unpredictably as science advances. Both clinicians and researchers need dependable access to published evidence concerning diagnostic accuracy. Bibliographic databases such as EMBASE provide the most widely available entrée to this literature. The objective of this study was to develop search strategies that optimize the retrieval of methodologically sound diagnostic studies from EMBASE for use by clinicians. METHODS: An analytic survey was conducted, comparing hand searches of 55 journals with retrievals from EMBASE for 4,843 candidate search terms and 6,574 combinations. All articles were rated using purpose and quality indicators, and clinically relevant diagnostic accuracy articles were categorized as 'pass' or 'fail' according to explicit criteria for scientific merit. Candidate search strategies were run in EMBASE, the retrievals being compared with the hand search data. The proposed search strategies were treated as "diagnostic tests" for sound studies and the manual review of the literature was treated as the "gold standard." The sensitivity, specificity, precision and accuracy of the search strategies were calculated. RESULTS: Of the 433 articles about diagnostic tests, 97 (22.4%) met basic criteria for scientific merit. Combinations of search terms reached peak sensitivities of 100% with specificity at 70.4%. Compared with best single terms, best multiple terms increased sensitivity for sound studies by 8.2% (absolute increase), but decreased specificity (absolute decrease 6%) when sensitivity was maximized. When terms were combined to maximize specificity, the single term "specificity.tw." (specificity of 98.2%) outperformed combinations of terms. CONCLUSION: Empirically derived search strategies combining indexing terms and textwords can achieve high sensitivity and specificity for retrieving sound diagnostic studies from EMBASE. These search filters will enhance the searching efforts of clinicians.

    Developing search strategies for clinical practice guidelines in SUMSearch and Google Scholar and assessing their retrieval performance

    BACKGROUND: Information overload, increasing time constraints, and inappropriate search strategies complicate the detection of clinical practice guidelines (CPGs). The aim of this study was to provide clinicians with recommendations for search strategies to efficiently identify relevant CPGs in SUMSearch and Google Scholar. METHODS: We compared the retrieval efficiency (retrieval performance) of search strategies to identify CPGs in SUMSearch and Google Scholar. For this purpose, a two-term GLAD (GuideLine And Disease) strategy was developed, combining a defined CPG term with a specific disease term (MeSH term). We used three different CPG terms and nine MeSH terms for nine selected diseases to identify the most efficient GLAD strategy for each search engine. The retrievals for the nine diseases were pooled. To compare GLAD strategies, we used a manual review of all retrievals as a reference standard. The CPGs detected had to fulfil predefined criteria, e.g., the inclusion of therapeutic recommendations. Retrieval performance was evaluated by calculating so-called diagnostic parameters (sensitivity, specificity, and "Number Needed to Read" [NNR]) for the search strategies. RESULTS: The search yielded a total of 2830 retrievals: 987 (34.9%) in Google Scholar and 1843 (65.1%) in SUMSearch. Altogether, we found 119 unique and relevant guidelines for the nine diseases (reference standard). Overall, the GLAD strategies showed better retrieval performance in SUMSearch than in Google Scholar. The performance pattern was similar between search engines: strategies including the term "guideline" yielded the highest sensitivity (SUMSearch: 81.5%; Google Scholar: 31.9%), while strategies including the term "practice guideline" yielded the highest specificity (SUMSearch: 89.5%; Google Scholar: 95.7%) and the lowest NNR (SUMSearch: 7.0; Google Scholar: 9.3). CONCLUSION: SUMSearch is a useful tool for swiftly gaining an overview of available CPGs. Its retrieval performance is superior to that of Google Scholar, where a search is more time consuming, as substantially more retrievals have to be reviewed to detect one relevant CPG. In both search engines, the CPG term "guideline" should be used to obtain a comprehensive overview of CPGs, and the term "practice guideline" should be used if a less time-consuming approach to detecting CPGs is desired.
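    Of the "diagnostic parameters" above, Number Needed to Read is simply the reciprocal of precision, by analogy with Number Needed to Treat. A small sketch with illustrative counts:

```python
def precision_and_nnr(relevant_retrieved, total_retrieved):
    """Precision = relevant retrievals / all retrievals; NNR = 1/precision,
    i.e., how many retrievals must be read, on average, to find one
    relevant CPG."""
    precision = relevant_retrieved / total_retrieved
    return precision, 1 / precision

# The reported NNR of 7.0 implies a precision of about 1/7 = 14.3%:
print(precision_and_nnr(100, 700))   # -> (0.142..., 7.0)
```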

    Feature engineering and a proposed decision-support system for systematic reviewers of medical evidence

    Objectives: Evidence-based medicine depends on the timely synthesis of research findings. An important source of synthesized evidence resides in systematic reviews. However, a bottleneck in review production involves dual screening of citations with titles and abstracts to find eligible studies. For this research, we tested the effect of various kinds of textual information (features) on the performance of a machine learning classifier. Based on our findings, we propose an automated system to reduce screening burden, as well as offer quality assurance. Methods: We built a database of citations from 5 systematic reviews that varied with respect to domain, topic, and sponsor. Consensus judgments regarding eligibility were inferred from published reports. We extracted 5 feature sets from citations: alphabetic, alphanumeric+, indexing, features mapped to concepts in systematic reviews, and topic models. To simulate a two-person team, we divided the data into random halves. We optimized the parameters of a Bayesian classifier, then trained and tested models on alternate data halves. Overall, we conducted 50 independent tests. Results: All tests of summary performance (mean F3) surpassed the corresponding baseline, P < 0.0001. The ranks for mean F3, precision, and classification error were statistically different across feature sets averaged over reviews; P-values for Friedman's test were .045, .002, and .002, respectively. Differences in ranks for mean recall were not statistically significant. Alphanumeric+ features were associated with the best performance; the mean reduction in screening burden for this feature type ranged from 88% to 98% for the second pass through citations and from 38% to 48% overall. Conclusions: A computer-assisted decision-support system based on our methods could substantially reduce the burden of screening citations for systematic review teams and solo reviewers. Additionally, such a system could deliver quality assurance both by confirming concordant decisions and by naming studies associated with discordant decisions for further consideration.
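    The summary measure mean F3 is the F-beta score with beta = 3, which weights recall roughly nine times as heavily as precision; this suits screening, where a missed eligible study is far costlier than reading an extra citation. A minimal sketch (the inputs are illustrative; scikit-learn's fbeta_score(y_true, y_pred, beta=3) computes the same quantity from label vectors):

```python
def f_beta(precision, recall, beta=3.0):
    """F-beta = (1 + b^2) * P * R / (b^2 * P + R); beta = 3 emphasizes
    recall, matching the F3 summary measure used above."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Illustrative values: with high recall, modest precision barely dents F3.
print(f_beta(precision=0.30, recall=0.95))   # ~0.78
```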

    What do evidence-based secondary journals tell us about the publication of clinically important articles in primary healthcare journals?

    BACKGROUND: We conducted this analysis to determine i) which journals publish high-quality, clinically relevant studies in internal medicine, general/family practice, general practice nursing, and mental health; and ii) the proportion of clinically relevant articles in each journal. METHODS: We performed an analytic survey of a hand search of 170 general medicine, general healthcare, and specialty journals for 2000. Research staff assessed individual articles by using explicit criteria for scientific merit for healthcare application. Practitioners assessed the clinical importance of these articles. Outcome measures were the number of high-quality, clinically relevant studies published in the 170 journal titles and how many of these were published in each of four discipline-specific, secondary "evidence-based" journals (ACP Journal Club for internal medicine and its subspecialties; Evidence-Based Medicine for general/family practice; Evidence-Based Nursing for general practice nursing; and Evidence-Based Mental Health for all aspects of mental health). Original studies and review articles were classified for purpose: therapy and prevention, screening and diagnosis, prognosis, etiology and harm, economics and cost, clinical prediction guides, and qualitative studies. RESULTS: We evaluated 60,352 articles from 170 journal titles. The pass criteria of high-quality methods and clinically relevant material were met by 3059 original articles and 1073 review articles. For ACP Journal Club (internal medicine), four titles supplied 56.5% of the articles and 27 titles supplied the other 43.5%. For Evidence-Based Medicine (general/family practice), five titles supplied 50.7% of the articles and 40 titles supplied the remaining 49.3%. For Evidence-Based Nursing (general practice nursing), seven titles supplied 51.0% of the articles and 34 additional titles supplied 49.0%. For Evidence-Based Mental Health (mental health), nine titles supplied 53.2% of the articles and 34 additional titles supplied 46.8%. For the disciplines of internal medicine, general/family practice, and mental health (but not general practice nursing), the number of clinically important articles was correlated with Science Citation Index (SCI) Impact Factors. CONCLUSIONS: Although many clinical journals publish high-quality, clinically relevant and important original studies and systematic reviews, the articles for each discipline studied were concentrated in a small subset of journals. This subset varied according to healthcare discipline; however, many of the important articles for all disciplines in this study were published in broad-based healthcare journals rather than subspecialty or discipline-specific journals.

    A cross-sectional study of the number and frequency of terms used to refer to knowledge translation in a body of health literature in 2006: a Tower of Babel?

    BACKGROUND: The study of implementing research findings into practice is rapidly growing and has acquired many competing names (e.g., dissemination, uptake, utilization, translation) and contributing disciplines. The use of multiple terms across disciplines poses barriers to communication and progress in applying research findings. We sought to establish an inventory of terms describing this field and how often authors use them in a collection of health literature published in 2006. METHODS: We refer to this field as knowledge translation (KT). Terms describing aspects of KT and their definitions were collected from the literature, the internet, reports, textbooks, and contact with experts. We compiled a database of KT and other articles by reading 12 healthcare journals representing multiple disciplines. All articles published in these journals in 2006 were categorized as being KT or not. The KT articles (all KT) were further categorized, if possible, for whether they described KT projects or implementations (KT application articles), or presented the theoretical basis, models, tools, methods, or techniques of KT (KT theory articles). Accuracy was checked using duplicate reading. Custom-designed software determined how often KT terms were used in the titles and abstracts of articles categorized as being KT. RESULTS: A total of 2,603 articles were assessed, and 581 were identified as KT articles. Of these, 201 described KT applications, and 153 included KT theory. Of the 100 KT terms collected, 46 were used by the authors in the titles or abstracts of articles categorized as being KT. For all 581 KT articles, eight terms or term variations used by authors were highly discriminating for separating KT and non-KT articles (p < 0.001): implementation, adoption, quality improvement, dissemination, complex intervention (with multiple endings), implementation (within three words of) research, and complex intervention. More KT terms were associated with KT application articles (n = 13) and KT theory articles (n = 18). CONCLUSIONS: We collected 100 terms describing KT research. Authors used 46 of them in the titles and abstracts of KT articles. Of these, approximately half discriminated between KT and non-KT articles. Thus, the need for consolidation and consistent use of fewer terms related to KT research is evident.
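    The abstract reports which terms discriminate KT from non-KT articles at p < 0.001 but does not name the statistical test. A sketch under the assumption of a chi-square test on a 2x2 presence/absence table (the function name and matching rule are illustrative):

```python
from scipy.stats import chi2_contingency

def term_discriminates(term, kt_texts, non_kt_texts, alpha=0.001):
    """Chi-square test of whether `term` occurs at different rates in the
    titles/abstracts of KT vs non-KT articles. The test choice is an
    assumption; the abstract reports only the p-value threshold."""
    a = sum(term in t.lower() for t in kt_texts)        # KT, term present
    b = len(kt_texts) - a                               # KT, term absent
    c = sum(term in t.lower() for t in non_kt_texts)    # non-KT, present
    d = len(non_kt_texts) - c                           # non-KT, absent
    _, p, _, _ = chi2_contingency([[a, b], [c, d]])
    return p < alpha, p
```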

    Systematic reviews: a cross-sectional study of location and citation counts

    BACKGROUND: Systematic reviews summarize all pertinent evidence on a defined health question. They help clinical scientists to direct their research and clinicians to keep updated. Our objective was to determine the extent to which systematic reviews are clustered in a large collection of clinical journals and whether review type (narrative or systematic) affects citation counts. METHODS: We used hand searches of 170 clinical journals in the fields of general internal medicine, primary medical care, nursing, and mental health to identify review articles (year 2000). We defined 'review' as any full-text article that was bannered as a review, overview, or meta-analysis in the title or in a section heading, or that indicated in the text that the intention of the authors was to review or summarize the literature on a particular topic. We obtained citation counts for review articles in the five journals that published the most systematic reviews. RESULTS: 11% of the journals concentrated 80% of all systematic reviews. Impact factors were weakly correlated with the publication of systematic reviews (R² = 0.075, P = 0.0035). There were more citations for systematic reviews (median 26.5, IQR 12-56.5) than for narrative reviews (8, 20; P < .0001 for the difference). Systematic reviews had twice as many citations as narrative reviews published in the same journal (95% confidence interval 1.5-2.7). CONCLUSIONS: A few clinical journals published most systematic reviews. Authors cited systematic reviews more often than narrative reviews, an indirect endorsement of the 'hierarchy of evidence'.
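    The citation comparison above reports medians, IQRs, and a p-value but not the test used. A sketch assuming a rank-based (Mann-Whitney U) comparison, with illustrative inputs:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_citations(systematic, narrative):
    """Report median and IQR of citation counts per review type, plus a
    rank-based p-value. The Mann-Whitney U test is an assumption; the
    abstract names only the summary statistics."""
    for name, counts in (("systematic", systematic), ("narrative", narrative)):
        q1, med, q3 = np.percentile(counts, [25, 50, 75])
        print(f"{name}: median {med}, IQR {q1}-{q3}")
    _, p = mannwhitneyu(systematic, narrative, alternative="two-sided")
    print(f"P = {p:.4g}")

# Hypothetical citation counts for the two review types:
compare_citations([40, 12, 26, 57, 31, 18], [3, 8, 20, 5, 9, 11])
```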